Search for: All records

Creators/Authors contains: "Mossel, Elchanan"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. ABSTRACT A key set‐theoretic “spread” lemma has been central to two celebrated recent results in combinatorics: the improved bounds for the sunflower conjecture by Alweiss, Lovett, Wu, and Zhang, and the proof of the fractional Kahn–Kalai conjecture by Frankston, Kahn, Narayanan, and Park. In this work, we present a new proof of the spread lemma that, perhaps surprisingly, takes advantage of an explicit recasting of the argument in the language of Bayesian inference. We show that from this viewpoint the reasoning proceeds in a straightforward and principled probabilistic manner, leading to a truncated second moment calculation that concludes the proof.
    Free, publicly-accessible full text available July 1, 2026
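    For context, a sketch of one standard formulation of the spread lemma, in the style of the ALWZ and FKNP papers, is given below in LaTeX. The normalization and constants here are a common textbook form and are assumptions on our part, not necessarily the exact statement proved in this paper.

      % Sketch of a common formulation; the constant C and the error term are illustrative.
      A family $\mathcal{F}$ of subsets of a finite ground set $X$ is \emph{$R$-spread} if
      \[
        \Pr_{F \sim \mathcal{F}}[\, S \subseteq F \,] \;\le\; R^{-|S|}
        \qquad \text{for every } S \subseteq X,
      \]
      where $F$ is drawn uniformly from $\mathcal{F}$. The spread lemma asserts that there is
      a universal constant $C > 0$ such that if $\mathcal{F}$ is $R$-spread with every member
      of size at most $\ell$, and $W \subseteq X$ is a uniformly random subset of size at
      least $(C \log \ell / R)\,|X|$, then
      \[
        \Pr[\, \exists\, F \in \mathcal{F} : F \subseteq W \,] \;=\; 1 - o_{\ell}(1).
      \]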
  2. Free, publicly-accessible full text available May 1, 2026
  3. Free, publicly-accessible full text available May 15, 2026
  4. Free, publicly-accessible full text available December 15, 2025
  5. Free, publicly-accessible full text available February 1, 2026
  6. We study network archaeology inference in growing random geometric graphs. We consider the root finding problem for a random nearest neighbor tree in dimension d ∈ ℕ, generated by sequentially embedding vertices uniformly at random in the d-dimensional torus and connecting each new vertex to the nearest existing vertex. More precisely, given an error parameter ε > 0 and the unlabeled tree, we want to efficiently find a small set of candidate vertices such that the root is included in this set with probability at least 1 − ε. We call such a candidate set a confidence set. We define several variations of the root finding problem in geometric settings (embedded, metric, and graph root finding), which differ in the type of metric information provided in addition to the graph structure: the torus embedding, the edge lengths, or no additional information, respectively. We show that there exist efficient root finding algorithms for embedded and metric root finding. For embedded root finding, we derive upper and lower bounds (uniformly bounded in n) on the size of the confidence set: the upper bound is subpolynomial in 1/ε and stems from an explicit efficient algorithm, while the information-theoretic lower bound is polylogarithmic in 1/ε. In particular, in d = 1 we obtain matching upper and lower bounds for a confidence set of size Θ(log(1/ε)loglog(1/ε)).
    Free, publicly-accessible full text available November 21, 2025
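    The tree-growth process in the abstract above is fully specified (uniform points on the torus, each new vertex joined to its nearest predecessor), so it can be simulated directly. The Python sketch below does that and pairs it with a generic centrality heuristic for picking a confidence set; the heuristic, the L-infinity torus metric, and all parameter names are our illustrative assumptions, not the paper's algorithm.

      import random

      def torus_dist(x, y):
          # L-infinity distance on the unit torus (chosen for simplicity)
          return max(min(abs(a - b), 1 - abs(a - b)) for a, b in zip(x, y))

      def grow_nn_tree(n, d, seed=0):
          # Embed n points uniformly in [0,1)^d; join each new point to its
          # nearest predecessor, as described in the abstract.
          rng = random.Random(seed)
          pts = [tuple(rng.random() for _ in range(d))]
          parent = {0: None}
          for v in range(1, n):
              p = tuple(rng.random() for _ in range(d))
              parent[v] = min(range(v), key=lambda w: torus_dist(p, pts[w]))
              pts.append(p)
          return parent, pts

      def confidence_set(parent, k):
          # Toy heuristic (NOT the paper's algorithm): rank vertices by the size
          # of the largest component left after deleting them; return the k most
          # central. Uses only the tree structure, not labels or the embedding.
          n = len(parent)
          children = [[] for _ in range(n)]
          for v in range(1, n):
              children[parent[v]].append(v)
          sub = [1] * n
          for v in reversed(range(n)):   # parents have smaller labels: leaf-to-root pass
              if parent[v] is not None:
                  sub[parent[v]] += sub[v]
          def score(v):
              comps = [sub[c] for c in children[v]]
              if parent[v] is not None:
                  comps.append(n - sub[v])
              return max(comps, default=0)
          return sorted(range(n), key=score)[:k]

      parent, pts = grow_nn_tree(n=500, d=2)
      print(0 in confidence_set(parent, k=20))   # is the true root in the candidate set?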
  7. We study processes of societal knowledge accumulation, where the validity of a new unit of knowledge depends both on the correctness of its derivation and on the validity of the units it depends on. A fundamental question in this setting is: if a constant fraction of the new derivations is wrong, can investing a constant fraction of effort, bounded away from one, ensure that a constant fraction of knowledge in society is valid? Ben-Eliezer, Mikulincer, Mossel, and Sudan (ITCS 2023) introduced a concrete probabilistic model to analyze such questions and showed an affirmative answer. Their study, however, focuses on the simple case where each new unit depends on just one existing unit and units attach according to a preferential attachment rule. In this work, we consider much more general families of cumulative knowledge processes, where new units may attach according to varied attachment mechanisms and depend on multiple existing units. We also allow a (random) fraction of insertions of adversarial nodes. We give a robust affirmative answer to the above question by showing that for all of these models, as long as many of the units follow simple heuristics for checking a bounded number of the units they depend on, all errors will eventually be eliminated. Our results indicate that preserving the quality of large interdependent collections of units of knowledge is feasible, as long as careful but not too costly checks are performed when new units are derived and deposited.
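    A toy simulation in the spirit of this model is sketched below. The attachment rule, the checking rule, and all parameter names are our simplifications; in particular, this sketch only prevents new units from depending on known-invalid ones, whereas the processes studied in the paper also eliminate existing errors over time.

      import random

      def simulate(n=20000, m=2, eps=0.2, check_prob=0.5, seed=0):
          rng = random.Random(seed)
          valid = [True]      # unit 0 is an axiom, taken as valid
          targets = [0]       # degree-weighted multiset: preferential-attachment flavor
          for v in range(1, n):
              deps = {rng.choice(targets) for _ in range(m)}
              own_ok = rng.random() >= eps        # derivation correct with prob. 1 - eps
              if rng.random() < check_prob:
                  # bounded local check: drop dependencies the checker finds invalid
                  deps = {u for u in deps if valid[u]} or {0}
              # a unit is valid iff its derivation is correct and all deps are valid
              valid.append(own_ok and all(valid[u] for u in deps))
              targets.extend(deps)
              targets.append(v)
          return sum(valid) / n

      for c in (0.0, 0.3, 0.7):
          print(f"check_prob={c}: valid fraction ~ {simulate(check_prob=c):.3f}")

    Even this crude version shows the qualitative effect the abstract describes: without checks, invalidity compounds through dependencies, while a constant checking rate keeps a large fraction of units valid.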
  8. In widely used models of biological contagion, interventions that randomly rewire edges (generally making them 'longer') accelerate spread. However, recent work has argued that highly clustered, rather than random, networks facilitate the spread of threshold-based contagions, such as those motivated by myopic best response for the adoption of new innovations, norms, and products in games of strategic complements. Here we show that minor modifications to this model reverse this result, thereby harmonizing the qualitative facts about how network structure affects contagion. We analyse the rate of spread over circular lattices with rewired edges and show that a small probability of adoption below the threshold is enough to ensure that random rewiring accelerates the spread of a noisy threshold-based contagion. This conclusion is verified in simulations of empirical networks, and it remains valid with partial but sufficiently frequent rewiring, when adoption decisions are reversible but only infrequently so, and in high-dimensional lattice structures.
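    The mechanism described above (a noisy threshold rule on a rewired ring lattice) can be illustrated with the short Python sketch below. The synchronous update rule, the Watts-Strogatz-style rewiring, and all parameters are our illustrative choices, not the paper's exact specification.

      import random

      def ring_with_rewiring(n, k, beta, rng):
          # Each node links to its k nearest neighbors on each side; each edge is
          # rewired to a uniformly random endpoint with probability beta.
          nbrs = [set() for _ in range(n)]
          for u in range(n):
              for j in range(1, k + 1):
                  v = (u + j) % n
                  if rng.random() < beta:
                      v = rng.randrange(n)
                  if v != u:
                      nbrs[u].add(v); nbrs[v].add(u)
          return nbrs

      def spread_time(nbrs, theta=0.5, below_prob=0.05, seed_frac=0.02,
                      rng=None, max_t=10_000):
          # Noisy threshold rule: adopt surely if the adopting fraction of
          # neighbors is >= theta, and with small probability below_prob otherwise.
          n = len(nbrs)
          rng = rng or random.Random(0)
          adopted = [False] * n
          for v in rng.sample(range(n), max(1, int(seed_frac * n))):
              adopted[v] = True
          for t in range(1, max_t + 1):
              new = list(adopted)
              for v in range(n):
                  if adopted[v] or not nbrs[v]:
                      continue
                  frac = sum(adopted[u] for u in nbrs[v]) / len(nbrs[v])
                  if frac >= theta or rng.random() < below_prob:
                      new[v] = True
              adopted = new
              if all(adopted):
                  return t
          return max_t

      rng = random.Random(1)
      for beta in (0.0, 0.1):
          g = ring_with_rewiring(n=2000, k=3, beta=beta, rng=rng)
          print(f"beta={beta}: steps to full adoption ~ {spread_time(g, rng=random.Random(2))}")

    With below_prob = 0, rewiring slows this threshold contagion down, as in the clustered-networks argument; with a small positive below_prob, the rewired lattice typically reaches full adoption faster, matching the abstract's claim.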